7 research outputs found

    Exploring the role of trust and expectations in CRI using in-the-wild studies

    Studying interactions of children with humanoid robots in familiar spaces and natural contexts has become a key issue for social robotics. To address this need, we conducted several Child-Robot Interaction (CRI) events with the Pepper robot in Polish and Japanese kindergartens. In this paper, we explore the role of trust and expectations towards the robot in determining the success of CRI. We present several observations from the video recordings of our CRI events and the transcripts of free-format question-answering sessions with the robot, conducted using the Wizard-of-Oz (WOZ) methodology. From these observations, we identify children's behaviors that indicate trust (or lack thereof) towards the robot, e.g., challenging the robot or interacting with it physically. We also gather insights into children's expectations, e.g., expectations about the robot's causal agency, or expectations concerning the robot's relationships, preferences, and physical and behavioral capabilities. Based on our experiences, we suggest some guidelines for designing more effective CRI scenarios. Finally, we argue for the effectiveness of in-the-wild methodologies for planning and executing qualitative CRI studies.

    Detecting Gaze Direction Using Robot-Mounted and Mobile-Device Cameras

    Two common channels through which humans communicate are speech and gaze. Eye gaze is an important mode of communication: it allows people to better understand each other's intentions, desires, interests, and so on. The goal of this research is to develop a framework for gaze-triggered events which can be executed on a robot and on mobile devices, and which allows experiments to be performed. We experimentally evaluate the framework and the techniques for extracting gaze direction, based on a robot-mounted camera or a mobile-device camera, which are implemented in the framework. We investigate the impact of light on the accuracy of gaze estimation, and also how the overall accuracy depends on user eye and head movements. Our research shows that light intensity is important, and the placement of the light source is crucial. All the robot-mounted gaze detection modules we tested were found to be similar with regard to accuracy. The framework we developed was tested in a human-robot interaction experiment involving a job-interview scenario. The flexible structure of this scenario allowed us to test different components of the framework in varied real-world settings, which was very useful for progressing towards our long-term research goal of designing intuitive gaze-based interfaces for human-robot communication.
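The paper does not publish the framework's API, but the core idea of a gaze-triggered event can be sketched minimally: fire a callback once the estimated gaze direction stays within an angular threshold of a target for a number of consecutive frames. All names and parameters below (`GazeTrigger`, `threshold_deg`, `hold_frames`) are hypothetical illustrations, not the framework's actual interface.

```python
import math

class GazeTrigger:
    """Hypothetical sketch: fires when gaze is held near a target
    direction (yaw/pitch, in degrees) for `hold_frames` consecutive
    frames; any off-target frame resets the streak."""

    def __init__(self, target_yaw, target_pitch,
                 threshold_deg=10.0, hold_frames=5):
        self.target = (target_yaw, target_pitch)
        self.threshold = threshold_deg
        self.hold_frames = hold_frames
        self._streak = 0

    def update(self, yaw, pitch):
        # Angular distance of the current estimate from the target.
        dist = math.hypot(yaw - self.target[0], pitch - self.target[1])
        self._streak = self._streak + 1 if dist <= self.threshold else 0
        if self._streak >= self.hold_frames:
            self._streak = 0  # re-arm after firing
            return True
        return False
```

A per-frame gaze estimate (e.g., from a robot-mounted camera pipeline) would be fed into `update()`; the boolean return drives the triggered event.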

    Gaze aversion in conversational settings: An investigation based on mock job interview

    We report the results of an empirical study on gaze aversion during dyadic human-to-human conversation in an interview setting. To address various methodological challenges in assessing gaze-to-face contact, we followed an approach where the experiment was conducted twice, each time with a different set of interviewees. In one of them the interviewer's gaze was tracked with an eye tracker, and in the other the interviewee's gaze was tracked. The gaze sequences obtained in both experiments were analyzed and modeled as Discrete-Time Markov Chains. The results show that the interviewer made more frequent and longer gaze contacts than the interviewee. Also, the interviewer made mostly diagonal gaze aversions, whereas the interviewee made sideways aversions (left or right). We discuss the relevance of this research for Human-Robot Interaction and outline some future research problems.
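Modeling a gaze sequence as a Discrete-Time Markov Chain amounts to estimating a transition matrix by maximum likelihood, i.e., row-normalized transition counts. A minimal sketch, with hypothetical state names (gaze contact plus aversion directions) standing in for whatever coding scheme the study used:

```python
from collections import defaultdict

def estimate_dtmc(sequence, states):
    """Maximum-likelihood DTMC estimate: count transitions between
    consecutive observations, then normalize each row to sum to 1."""
    counts = {s: defaultdict(int) for s in states}
    for prev, nxt in zip(sequence, sequence[1:]):
        counts[prev][nxt] += 1
    matrix = {}
    for s in states:
        total = sum(counts[s].values())
        matrix[s] = {t: (counts[s][t] / total if total else 0.0)
                     for t in states}
    return matrix

# Hypothetical gaze states: contact with the face, or an aversion
# direction. The observed sequence is illustrative, not real data.
states = ["contact", "left", "right", "diagonal"]
seq = ["contact", "left", "contact", "contact", "diagonal",
       "contact", "right", "contact", "diagonal", "contact"]
P = estimate_dtmc(seq, states)
# P["contact"]["diagonal"] == 0.4 in this toy sequence
```

Quantities such as "more frequent and longer gaze contacts" then fall out of the chain: transition probabilities out of the `contact` state govern aversion frequency, and the self-transition `P["contact"]["contact"]` governs contact duration.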

    Adapting Everyday Manipulation Skills to Varied Scenarios

    This work is partially funded by: (1) AGH University of Science and Technology, grant No. 15.11.230.318; (2) Deutsche Forschungsgemeinschaft (DFG) through the Collaborative Research Center 1320, EASE; (3) an Elphinstone Scholarship from the University of Aberdeen. Postprint

    Learning Symbolic User Models for Intrusion Detection: A Method and

    Abstract. This paper briefly describes the LUS-MT method for automatically learning user signatures (models of computer users) from data streams capturing users' interactions with computers. The signatures take the form of collections of multistate templates (MTs), each characterizing a pattern in the user's behavior. By applying the models to new user activities, the system can detect an impostor or verify legitimate user activity. Advantages of the method include the high expressive power of the models (a single template can characterize a large number of different user behaviors) and the ease of their interpretation, which makes it possible for an expert to edit or enhance them. Initial results are very promising and show the potential of the method for user modeling.
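The abstract does not give the template syntax, but the idea that "a single template can characterize a large number of different user behaviors" can be illustrated with a simplified sketch: assume a template is a sequence of states, each state a set of allowed values per event attribute, and the template matches any window of consecutive events satisfying the states in order. This is an assumed simplification, not the actual LUS-MT representation.

```python
def state_matches(event, state):
    """An event (dict of attributes) satisfies a state if every
    constrained attribute takes one of the allowed values."""
    return all(event.get(attr) in allowed for attr, allowed in state.items())

def template_matches(events, template):
    """Return start indices where the template matches a window of
    consecutive events (one state per event, in order)."""
    n, k = len(events), len(template)
    return [i for i in range(n - k + 1)
            if all(state_matches(events[i + j], template[j]) for j in range(k))]

# Illustrative event stream of user actions (hypothetical attributes).
events = [
    {"proc": "explorer", "win": "main"},
    {"proc": "outlook", "win": "inbox"},
    {"proc": "outlook", "win": "compose"},
]
# Two-state template: any outlook event, then an outlook compose event.
tmpl = [{"proc": {"outlook"}},
        {"proc": {"outlook"}, "win": {"compose"}}]
```

Because each state allows a set of values rather than a single one, one template covers many concrete behavior sequences, which is the expressive-power property the abstract highlights.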